Qualms concerning Tsallis’ Use of the Maximum Entropy Formalism

Author

  • B. H. Lavenda
Abstract

Tsallis’ ‘statistical thermodynamic’ formulation of the nonadditive entropy of degree-α is neither correct nor self-consistent. It is well known that the maximum entropy formalism [1], the minimum discrimination information [2], and Gauss’ principle [3, 4] all lead to the same results when a certain condition on the prior probability distribution is imposed [5]. All these methods lead to the same form of the posterior probability distribution, namely, the exponential family of distributions. Tsallis and collaborators [6] have tried to adapt the maximum entropy formalism that uses the Shannon entropy to one that uses a nonadditive entropy of degree-α. In order to obtain analytic expressions for the probabilities that maximize the nonadditive entropy, they found it necessary to use ‘escort probabilities’ [7] of the same power as the nonadditive entropy. If the procedure they use is correct, then Gauss’ principle should give the same optimum probabilities. Yet we will find that the Tsallis result requires the prior probability distribution to be given by the same unphysical condition as the maximum entropy formalism and, what is worse, requires the potential of the error law to vanish. The potential of the error law is what information theory refers to as the error [8]; that is, the difference between the inaccuracy and the entropy. Unless the ‘true’ probability distribution, P = (p(x1), p(x2), ..., p(xm)), coincides with the estimated probability distribution, Q = (q(x1), q(x2), ..., q(xm)), the error does not vanish. Moreover, we shall show that two procedures of averaging, one using the escort probabilities explicitly, do not give the same result, and that the relation between the potential of the error law and the nonadditive entropy requires the latter to vanish when the former vanishes.

Let X be a random variable whose values x1, x2, ..., xm are obtained in m independent trials. Prior to the observations the distribution is Q, and after the observations the unknown probability distribution is P. The observer has ...
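The quantities at stake are standard, and a small numerical illustration makes the objections concrete. The sketch below (Python; the function names, sample distributions, and use of natural logarithms are our own illustrative choices, not the paper's notation) computes the Shannon entropy, the Kerridge inaccuracy, and their difference, the 'error' (the Kullback-Leibler divergence, which vanishes only when Q = P), together with the Tsallis entropy of degree α, its escort probabilities, and the ordinary versus escort averages whose disagreement the abstract mentions.

```python
import numpy as np

def shannon_entropy(p):
    """H(P) = -sum_i p_i log p_i (natural logarithm)."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

def inaccuracy(p, q):
    """Kerridge inaccuracy H(P, Q) = -sum_i p_i log q_i."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return -np.sum(p * np.log(q))

def error(p, q):
    """The 'error': inaccuracy minus entropy, i.e. the Kullback-Leibler
    divergence D(P||Q); non-negative, zero only when P coincides with Q."""
    return inaccuracy(p, q) - shannon_entropy(p)

def tsallis_entropy(p, alpha):
    """Nonadditive entropy of degree alpha:
    S_alpha(P) = (1 - sum_i p_i**alpha) / (alpha - 1)."""
    p = np.asarray(p, dtype=float)
    return (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)

def escort(p, alpha):
    """Escort probabilities of the same power as the entropy:
    P_i = p_i**alpha / sum_j p_j**alpha."""
    w = np.asarray(p, dtype=float) ** alpha
    return w / w.sum()

P = np.array([0.5, 0.3, 0.2])  # 'true' distribution (illustrative)
Q = np.array([0.4, 0.4, 0.2])  # estimated distribution (illustrative)
x = np.array([1.0, 2.0, 3.0])  # values of the random variable X

print(error(P, Q))                # positive, since P differs from Q
print(error(P, P))                # 0.0: the error vanishes only when Q = P
print(tsallis_entropy(P, 1.5))    # nonadditive entropy of degree 3/2
print(P @ x, escort(P, 1.5) @ x)  # ordinary vs escort average: they differ
```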


Similar resources

Tsallis Entropy and Conditional Tsallis Entropy of Fuzzy Partitions

The purpose of this study is to define the concepts of Tsallis entropy and conditional Tsallis entropy of fuzzy partitions and to obtain some results concerning this kind of entropy. We show that the Tsallis entropy of fuzzy partitions has the subadditivity and concavity properties. We study this information measure under the refinement and zero-mode subset relations. We check the chain rules for ...
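For orientation, the functional presumably being studied is the standard Tsallis form evaluated on the measures of the partition blocks; in our notation (not necessarily that paper's), for a fuzzy partition ξ = {A_1, ..., A_n} with block measures μ(A_i),

\[ S_q(\xi) = \frac{1}{q-1}\left(1 - \sum_{i=1}^{n} \mu(A_i)^{q}\right), \qquad q > 0,\ q \neq 1, \]

and the subadditivity and concavity claims refer to this functional, which tends to the Shannon entropy of the partition as q → 1.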


Tsallis Maximum Entropy Lorenz Curves

In this paper, we first derive a family of maximum Tsallis entropy distributions under optional side conditions on the mean income and the Gini index. Corresponding to these distributions, a family of Lorenz curves compatible with the optional side conditions is then generated. We also show that our results reduce to the Shannon entropy case as β tends to one. Finally, by using ac...
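Maximizing the Tsallis entropy under a mean constraint is known to produce q-exponential densities; a generic form, written here with β as the entropic index (matching the abstract's β → 1 limit) and λ as a Lagrange multiplier, both our notation, is

\[ f(x) \propto \left[1 - (1-\beta)\,\lambda x\right]^{1/(1-\beta)}, \]

which reduces to the ordinary exponential density e^{-λx}, the Shannon maximum entropy result, as β → 1; the Gini-index side condition would add a further constraint and multiplier.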


The Rate of Entropy for Gaussian Processes

In this paper, we show that the Tsallis entropy rate for stochastic processes can be obtained as the limit of conditional entropies, as was done for the Shannon and Rényi entropy rates. Using this, we obtain the Tsallis entropy rate for stationary Gaussian processes. Finally, we derive the relation between the Rényi, Shannon, and Tsallis entropy rates for stationary Gaussian proc...
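Schematically, the construction referred to replaces the joint-entropy limit by a conditional one; assuming the conditional Tsallis entropy H_q(· | ·) is built from the joint distribution as in the Shannon case (our notation),

\[ h_q = \lim_{n \to \infty} H_q\left(X_n \mid X_{n-1}, \ldots, X_1\right), \]

which recovers the familiar Shannon entropy rate of a stationary process as q → 1.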


Tsallis Entropy and the Vlasov-Poisson Equations

We revisit maximum Tsallis entropy solutions to the Vlasov-Poisson equation describing gravitational N-body systems. We review their main characteristics and discuss their relationship with other applications of Tsallis statistics to systems with long-range interactions. In the following considerations we shall be dealing with a D-dimensional space so as to be in a position to investigate poss...


Optimal Multi-Level Thresholding Based on Maximum Tsallis Entropy via an Artificial Bee Colony Approach

This paper proposes a global multi-level thresholding method for image segmentation. As its criterion, the traditional method uses the Shannon entropy, which originated in information theory, treating the gray-level image histogram as a probability distribution; here we apply the Tsallis entropy as a generalized information-theoretic formalism. For the algorithm, we used the artificia...
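As a rough illustration of the criterion involved, here is a minimal single-threshold sketch in Python. It uses the standard pseudo-additive Tsallis combination rule for the two classes, with an exhaustive search standing in for that paper's artificial-bee-colony optimizer and multi-level extension; the q value and the synthetic histogram are arbitrary illustrative choices.

```python
import numpy as np

def tsallis_threshold(hist, q=0.8):
    """Pick the gray level t maximizing the Tsallis criterion
    S_q(A) + S_q(B) + (1-q)*S_q(A)*S_q(B) for the background (A) and
    foreground (B) classes split at t. Exhaustive search stands in here
    for the artificial-bee-colony optimizer."""
    p = hist.astype(float) / hist.sum()        # histogram -> probabilities
    best_t, best_val = None, -np.inf
    for t in range(1, len(p)):
        w_a, w_b = p[:t].sum(), p[t:].sum()
        if w_a == 0 or w_b == 0:
            continue                           # skip empty classes
        pa, pb = p[:t] / w_a, p[t:] / w_b      # class-conditional distributions
        s_a = (1 - np.sum(pa ** q)) / (q - 1)
        s_b = (1 - np.sum(pb ** q)) / (q - 1)
        val = s_a + s_b + (1 - q) * s_a * s_b  # pseudo-additive combination
        if val > best_val:
            best_t, best_val = t, val
    return best_t

# Illustrative bimodal histogram over 256 gray levels.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
hist, _ = np.histogram(samples, bins=256, range=(0, 256))
print(tsallis_threshold(hist))  # a threshold between the two modes
```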



Journal title:

Volume   Issue

Pages  -

Publication date: 2003